
Design doc of fixed-point quantization. #10553

Merged: qingqing01 merged 5 commits into PaddlePaddle:develop from the quantization_doc branch on Jul 11, 2018

Conversation

qingqing01 (Contributor) commented May 10, 2018

@qingqing01 force-pushed the quantization_doc branch 3 times, most recently from 9de2945 to 02508da, on May 23, 2018 05:28
@qingqing01 changed the title from "[WIP] Design doc of fixed-point quantization." to "Design doc of fixed-point quantization." on May 23, 2018
kuke (Contributor) left a comment:

Thanks for the design doc. Some comments on the writing:

@@ -0,0 +1,115 @@
Fixed-point quantization is to use lower bit, for example, 2 bit, 3 bit or 8 bit fixed-point to represent weights and activations, which usually are singe float point with 32 bit. The fixed-point representation has advantages in reducing memory bandwidth, lowering power consumption and computational resources as well as the model storage requirements. It is especially import for the inference in embedded device deployment.
Contributor:

is to use lower bit --> uses lower bits
2 bit, 3 bit or 8 bit fixed-point --> 2-bit, 3-bit or 8-bit fixed point
singe float point with 32 bit --> in single-precision float-point format with 32 bits
import --> important
embedded device --> embedded-device

Contributor Author:

Done.

@@ -0,0 +1,115 @@
Fixed-point quantization is to use lower bit, for example, 2 bit, 3 bit or 8 bit fixed-point to represent weights and activations, which usually are singe float point with 32 bit. The fixed-point representation has advantages in reducing memory bandwidth, lowering power consumption and computational resources as well as the model storage requirements. It is especially import for the inference in embedded device deployment.

According some experiments, the apporach to quantize the model trained in float point directly works sufficiently on the large model, like the over-parameterized VGG model. But the accuracy drops a lot for the small model. In order to improve the tradeoff be-tween accuracy and latency, many quantized training apporaches are proposed.
Contributor:

According --> According to
sufficiently --> effectively
on the large model -> on the large models
like the over-parameterized VGG model --> like the VGG model having many parameters. The word over-parameterized has other meanings.
be-tween --> between

Contributor Author:

Done


According some experiments, the apporach to quantize the model trained in float point directly works sufficiently on the large model, like the over-parameterized VGG model. But the accuracy drops a lot for the small model. In order to improve the tradeoff be-tween accuracy and latency, many quantized training apporaches are proposed.

This document is to design a quantized training framework on Fluid. The first part will introduce how to quantize, The second part will describe the quantized training framework. The last part will describe how to the quantization range.
Contributor:

The last part will describe how to the quantization range --> The last part will illustrate how to calculate the quantization range

Contributor Author:

Done


### How to quantize

There are many ways to quantizate the float value to fixed-point value. For example:
Contributor:

quantizate --> quantize

Contributor Author:

Done

where, $x$ is the float value to be quantized, $[a, b]$ is the quantization range, $a$ is the minimum value and $b$ is the maximal value. $\left \lfloor \right \rceil$ denotes rounding to the nearest integer. If the quantization level is $k$, $n$ is $2^k$, for example, $k$ is 8 and $n$ is 256. $q$ is the quantized integer.


The quantization we apllied is parameterized by the number of quantization levels and maximum absolute value:
Contributor:

apllied --> applied

Contributor Author:

Done
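
The equations referenced here were rendered in the original doc and are not reproduced in this excerpt. As a rough illustration (the rounding and signed-range conventions below are assumptions, not necessarily the doc's exact formulas), the two schemes might be sketched as:

import numpy as np

def quantize_min_max(x, a, b, k=8):
    # map x in [a, b] onto n = 2**k levels; np.rint is the
    # round-to-nearest-integer operator the doc denotes with the bracket symbol
    n = 2 ** k
    return np.rint((x - a) / (b - a) * (n - 1)).astype(np.int64)

def quantize_max_abs(x, max_abs, k=8):
    # parameterized by the number of levels and the maximum absolute value;
    # assumes a symmetric mapping of [-max_abs, max_abs] around zero
    n = 2 ** k
    return np.rint(x / max_abs * (n / 2 - 1)).astype(np.int64)

def dequantize_max_abs(q, max_abs, k=8):
    # approximate inverse of quantize_max_abs
    n = 2 ** k
    return q.astype(np.float32) * max_abs / (n / 2 - 1)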


#### Backward pass

See the figure 3. The gradients are calculated by dequantized weights and activations. All inputs and outputs are float point with 32 bit. And in the weight updating process, the gradients will be added to the original weight, not the quantized or dequantized weights.
Contributor:

See the figure 3 --> See Figure 3

Contributor Author:

Done
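
As a self-contained toy sketch of this update rule (the loss, shapes, and function are invented for illustration; gradients flow through the dequantized weights, straight-through style):

import numpy as np

def sgd_step_with_quantized_forward(w_fp32, x, lr=0.01, k=8):
    n = 2 ** k
    # forward: quantize then immediately dequantize the weights, so the
    # rest of the graph still computes on 32-bit float values
    max_abs = np.max(np.abs(w_fp32))
    q = np.rint(w_fp32 / max_abs * (n / 2 - 1))
    w_deq = q * max_abs / (n / 2 - 1)

    # toy loss 0.5 * ||x @ w_deq||^2; its gradient w.r.t. the weights
    # is computed from the dequantized values
    y = x @ w_deq
    grad_w = x.T @ y

    # the update is applied to the ORIGINAL float weights, not to the
    # quantized or dequantized copies
    return w_fp32 - lr * grad_w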


So the quantization transipler will change some inputs of the corresponding backward operators.

### How to calculate quantization scale
Contributor:

scale or range, should they be consistent?

Contributor Author:

use scale, change the description above.


### How to calculate quantization scale

There are two strategies to calculate quantization scale, we call them dynamic and static strategy. The dynamic strategy is to calculate the quantization scale value each iteration. The static strategy is to fix the quantization scale for different inputs.
Contributor:

is to calculate --> calculates
is to fix the quantization scale --> keeps the same quantization scale

Contributor Author:

Done.


There are two strategies to calculate quantization scale, we call them dynamic and static strategy. The dynamic strategy is to calculate the quantization scale value each iteration. The static strategy is to fix the quantization scale for different inputs.

For weights, we apply the dynamic strategy for weights in the training, that is to say, the quantization scale will recalculate during each iteration until the traning is finished.
Contributor:

remove for weights
will recalculate --> will be recalculated

Contributor Author:

Done.
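
Concretely, the dynamic strategy stores nothing between steps; the scale is simply recomputed from the live tensor each iteration. A one-function sketch (illustrative, not Fluid API):

import numpy as np

def dynamic_weight_scale(weights):
    # recomputed from the current weight tensor at every training iteration
    return float(np.max(np.abs(weights)))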


For weights, we apply the dynamic strategy for weights in the training, that is to say, the quantization scale will recalculate during each iteration until the traning is finished.

For activations, the quantization scales are estimated during training, then use them in inference. There are several different ways to estimat:
Contributor:

then use them --> then used
estimat --> estimate them

Contributor Author:

Done.
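
The doc's list of estimation strategies is not reproduced in this excerpt. One common estimator, shown here purely for illustration and not necessarily among the doc's options, is an exponential moving average of per-batch max-abs values, frozen after training and reused at inference:

import numpy as np

class ActivationScaleEstimator:
    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.scale = None  # running estimate of the activation scale

    def update(self, activation_batch):
        batch_max = float(np.max(np.abs(activation_batch)))
        if self.scale is None:
            self.scale = batch_max
        else:
            # exponential moving average of the per-batch maxima
            self.scale = self.momentum * self.scale + (1.0 - self.momentum) * batch_max
        return self.scale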



We use this equivalent workflow in the training. In our desigin, there is a quantization transipler to insert the quantization operator and the de-quantization operator in the Fluid `ProgramDesc`.
Contributor:

transipler -> transpiler

Contributor Author:

Done
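
Schematically, the transpiler walks the program and wraps each quantizable operator's float inputs in a quantize/dequantize pair. A toy sketch (the `Op` class, the op-type names, and this pass are all invented for illustration; the real transpiler rewrites the Fluid `ProgramDesc`):

class Op:
    # hypothetical stand-in for a Fluid operator description
    def __init__(self, type, inputs, outputs):
        self.type, self.inputs, self.outputs = type, inputs, outputs

QUANTIZABLE = {"conv2d", "mul"}  # assumed set of quantizable op types

def insert_quant_dequant(ops):
    out = []
    for op in ops:
        if op.type in QUANTIZABLE:
            for i, name in enumerate(op.inputs):
                # quantize the float input, then dequantize it back so the
                # wrapped op still consumes 32-bit float values
                out.append(Op("quantize", [name], [name + "@quant"]))
                out.append(Op("dequantize", [name + "@quant"], [name + "@dequant"]))
                op.inputs[i] = name + "@dequant"
        out.append(op)
    return out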



So the quantization transipler will change some inputs of the corresponding backward operators.
Contributor:

It seems the current plan is to insert quant op first, generate backward, then change backward ops.
What if we first generate backward, then insert quant ops in forward? Then we don't need to update backward ops?

Contributor Author (@qingqing01, May 28, 2018):

> What if we first generate backward, then insert quant ops in forward?

Yeah, the implementation in https://github.com/PaddlePaddle/Paddle/pull/10693/files works this way. The usage is like:

import paddle.fluid as fluid

main = fluid.Program()
startup = fluid.Program()
with fluid.program_guard(main, startup):
    loss = network(3)  # `network` builds the model and returns the loss
    opt = fluid.optimizer.Adam(learning_rate=0.001)
    opt.minimize(loss)  # generate the backward ops first
    # then rewrite the program in place, inserting quantize/dequantize ops
    t = fluid.QuantizeTranspiler()
    t.transpile(main)

Since the backward pass needs to use the dequantized weights and activations (see Figure 3 in https://github.com/qingqing01/Paddle/blob/quantization_doc/doc/fluid/design/quantization/fixed_point_quantization.md), we still need to rewrite the backward ops.

Contributor Author (@qingqing01) left a comment:

@kuke @panyx0718 Thanks for your detailed review. Thanks very much.

Contributor Author (@qingqing01):
@panyx0718 @kuke Is there any problem with this PR?

kuke (Contributor) left a comment:

LGTM

@qingqing01 qingqing01 merged commit 79d797f into PaddlePaddle:develop Jul 11, 2018
kuke pushed a commit to kuke/Paddle that referenced this pull request Aug 25, 2018
* Design doc of fixed-point quantization.
* Update fixed point quantization desigin doc.
* Fix doc format.
* Update the backward part.
* Fix the grammatical.
@qingqing01 qingqing01 deleted the quantization_doc branch November 14, 2019 05:29
Successfully merging this pull request may close these issues.

Write fixed-point quantization design.
3 participants